Policy Gradient Methods for Robot Control

Authors

  • Jan Peters
  • Sethu Vijayakumar
  • Stefan Schaal
Abstract

Reinforcement learning offers the most general framework to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high-dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches to reinforcement learning in terms of their applicability in humanoid robotics. Methods can be coarsely classified into three categories: greedy methods, ‘vanilla’ policy gradient methods, and natural gradient methods. We argue that greedy methods are unlikely to scale to the domain of humanoid robotics, as they are problematic when used with function approximation. ‘Vanilla’ policy gradient methods, on the other hand, have been successfully applied to a real-world robot. We demonstrate that these methods can be significantly improved by using the natural policy gradient instead of the regular policy gradient. Proofs are provided that Kakade’s average of the natural gradient [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. This algorithm converges with probability one to the nearest local minimum of the cost function in Riemannian space. It outperforms non-natural policy gradients by far in a cart-pole balancing evaluation and offers a promising route towards reinforcement learning for truly high-dimensional continuous state-action systems.
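The distinction the abstract draws between ‘vanilla’ and natural policy gradients can be illustrated on a toy problem. The sketch below is an illustrative assumption, not the paper's Natural Actor-Critic: it uses a one-dimensional Gaussian policy with fixed variance and a made-up reward, where the Fisher information matrix reduces to the scalar 1/σ², so the natural gradient is simply the vanilla gradient rescaled by σ².

```python
import numpy as np

# Hedged sketch: vanilla vs. natural policy gradient for a 1-D Gaussian
# policy pi(a) = N(a; theta, sigma^2) with fixed sigma. The reward
# R(a) = -(a - 2)^2 and all names here are illustrative assumptions.

rng = np.random.default_rng(0)
sigma = 1.0
theta = 0.0   # policy mean, the single learnable parameter
alpha = 0.05  # step size

def reward(a):
    return -(a - 2.0) ** 2  # maximized at a = 2

for _ in range(200):
    actions = theta + sigma * rng.standard_normal(500)
    returns = reward(actions)
    # Score function: d/dtheta log pi(a) = (a - theta) / sigma^2
    score = (actions - theta) / sigma ** 2
    # Baseline-subtracted 'vanilla' gradient estimate
    vanilla_grad = np.mean(score * (returns - returns.mean()))
    # Fisher information of a Gaussian mean is F = 1 / sigma^2, so the
    # natural gradient F^{-1} * grad rescales the vanilla one by sigma^2.
    natural_grad = vanilla_grad * sigma ** 2
    theta += alpha * natural_grad

print(f"learned mean: {theta:.2f}")  # approaches the optimum at 2.0
```

With σ = 1 the two gradients coincide; the point of the natural gradient is that for general policy parameterizations the premultiplication by F⁻¹ makes the update invariant to how the policy is parameterized, which is what drives the performance gap reported in the cart-pole evaluation.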


Related articles

Reinforcement Learning for Humanoid Robotics

Reinforcement learning offers one of the most general frameworks to take traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high-dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches to reinforcement learning in terms of their applicability in humanoid robotics. Methods ca...


Policy Learning - A Unified Perspective with Applications in Robotics

Policy learning approaches are among the best-suited methods for high-dimensional, continuous control systems such as anthropomorphic robot arms and humanoid robots. In this paper, we show two contributions: firstly, we show a unified perspective which allows us to derive several policy learning algorithms from a common point of view, i.e., policy gradient algorithms, natural-gradient algorithms ...


Policy gradient approach to multi-robot learning

In theory, the formalism and methods of reinforcement learning (RL) can be applied to address any optimal control task, yielding optimal solutions while requiring very little a priori information on the system itself. However, in practice, RL methods suffer from the “curse of dimensionality” and exhibit limited applicability in complex control problems. Unfortunately, many actual control proble...


Policy Gradients with Parameter-Based Exploration for Control

We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower variance gradient estimates than those obtained by policy gradient methods such as REINFORCE. For several complex control tasks, including robust standing with a humanoid robot, we show t...


Discrete time robust control of robot manipulators in the task space using adaptive fuzzy estimator

This paper presents a discrete-time robust control for electrically driven robot manipulators in the task space. A novel discrete-time model-free control law is proposed by employing an adaptive fuzzy estimator for the compensation of the uncertainty including model uncertainty, external disturbances and discretization error. Parameters of the fuzzy estimator are adapted to minimize the estimat...




Publication date: 2003